8 research outputs found

    Legal Risks of Adversarial Machine Learning Research

    Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. The problem of attacking ML systems is so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note on how most ML classifiers are vulnerable to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems, and the US and EU have likewise made the security and safety of AI systems a top priority. Research on adversarial machine learning is now booming, but it is not without risks. Studying or testing the security of any operational system may violate the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. The CFAA’s broad scope, rigid requirements, and heavy penalties, critics argue, have a chilling effect on security research, and adversarial ML security research is likely no different. However, prior work on adversarial ML research and the CFAA is sparse and narrowly focused. In this article, we help address this gap in the literature. For legal practitioners, we describe the complex and confusing legal landscape of applying the CFAA to adversarial ML. For adversarial ML researchers, we describe the potential risks of conducting adversarial ML research. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

    Politics of Adversarial Machine Learning

    In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and the human rights literature to show how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian, ends.
    Comment: Authors ordered alphabetically; 4 pages.
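    To make the perturbation class of attacks concrete, below is a minimal sketch of an evasion attack in the style of the fast gradient sign method, written in PyTorch. The toy classifier, input shape, and epsilon value are illustrative assumptions, not details taken from the paper.

    # Minimal FGSM-style perturbation sketch (illustrative; toy model and values assumed).
    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return a copy of x perturbed to increase the classifier's loss on label y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction of the loss gradient's sign, then clamp to a valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Toy stand-in classifier: 3x32x32 images, 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([0])
    x_adv = fgsm_perturb(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())

    A perturbation of this kind is what lets a subject of an ML system alter an input just enough to change the model's output, which is why the paper treats such attacks as tools that can serve socially desirable ends.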

    Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning

    This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as "real world." Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.
    Comment: Accepted to the NeurIPS 2020 Workshop on Dataset Curation and Security; also accepted at the Navigating the Broader Impacts of AI Research Workshop. All authors contributed equally. The list of authors is arranged alphabetically.

    Abstracts of National Conference on Research and Developments in Material Processing, Modelling and Characterization 2020

    This book presents the abstracts of the papers presented at the Online National Conference on Research and Developments in Material Processing, Modelling and Characterization 2020 (RDMPMC-2020), held on 26th and 27th August 2020 and organized by the Department of Metallurgical and Materials Engineering in association with the Department of Production and Industrial Engineering, National Institute of Technology Jamshedpur, Jharkhand, India.
    Conference Title: National Conference on Research and Developments in Material Processing, Modelling and Characterization 2020
    Conference Acronym: RDMPMC-2020
    Conference Date: 26–27 August 2020
    Conference Location: Online (Virtual Mode)
    Conference Organizer: Department of Metallurgical and Materials Engineering, National Institute of Technology Jamshedpur
    Co-organizer: Department of Production and Industrial Engineering, National Institute of Technology Jamshedpur, Jharkhand, India
    Conference Sponsor: TEQIP-